--- title: OpenHSI Hardware Camera Implementations keywords: fastai sidebar: home_sidebar summary: "Hardware cameras" description: "Hardware cameras" nb_path: "nbs/02_cameras.ipynb" ---
{% raw %}
{% endraw %}

{% include tip.html content='This module can be imported using `from openhsi.cameras import *`' %} Wrapper class and example code for getting images from the OpenHSI. {% include tip.html content='To use the camera, you will need some calibration files. You can also generate these files following this guide, which uses the calibrate module.' %}

{% raw %}
{% endraw %} {% raw %}
{% endraw %}

Webcams

This uses the OpenCV library to interface with a webcam, including the one built into your laptop. It is mainly intended for testing, but may be useful in its own right.

{% raw %}

class WebCamera[source]

WebCamera(mode:str=None, n_lines:int=16, processing_lvl:int=-1, preserve_raw:bool=False, json_path:str=None, pkl_path:str=None, print_settings=False) :: OpenHSI

Interface for webcam to test OpenHSI functionality
{% endraw %} {% raw %}
{% endraw %} {% raw %}
#hide_output

with WebCamera(n_lines=64, processing_lvl = 0, 
               json_path="assets/cam_settings.json", pkl_path="assets/cam_calibration.pkl") as cam:
    cam.collect()
    fig = cam.show(hist_eq=True)
    
fig.opts(width=200)
{% endraw %}
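Under the hood, every OpenHSI camera follows the same pushbroom pattern: grab `n_lines` frames and stack them into a datacube. A minimal sketch of that idea (hypothetical, using numpy stand-ins rather than a real camera):

```python
import numpy as np

def collect_lines(get_img, n_lines):
    """Stack n_lines frames into a (line, cross-track, wavelength) datacube.

    `get_img` stands in for the camera's frame grabber; any callable
    returning a 2D frame will do.
    """
    return np.stack([get_img() for _ in range(n_lines)], axis=0)

# Simulated 2D frames: 64 cross-track pixels x 100 wavelength bins
fake_get_img = lambda: np.random.rand(64, 100)
cube = collect_lines(fake_get_img, n_lines=16)
print(cube.shape)  # (16, 64, 100)
```

The real `collect()` adds buffering and per-line processing on top of this loop, but the stacking idea is the same.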

You can see a picture I held up to the webcam.

{% raw %}
plt.subplots(figsize=(12,4))
plt.imshow(cam.dc.data[:,1,:],cmap="gray",aspect=0.3)
{% endraw %}

XIMEA Camera

Used for the OpenHSI Camera Mark I with a Ximea detector (IMX252 sensor, e.g. MX031CG-SY).

Make sure you install the Ximea Python API beforehand, following the instructions at https://www.ximea.com/support/wiki/apis/Python
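Before constructing a `XimeaCamera`, it can help to confirm the API is importable. A small check (the `ximea_api_available` helper is hypothetical, not part of openhsi):

```python
def ximea_api_available() -> bool:
    """Return True if the Ximea Python API can be imported."""
    try:
        from ximea import xiapi  # provided by the Ximea API installer
        return True
    except ImportError:
        return False

print(ximea_api_available())
```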

{% raw %}

class XimeaCamera[source]

XimeaCamera(xbinwidth:int=896, xbinoffset:int=528, exposure_ms:float=10, serial_num:str=None, n_lines:int=16, processing_lvl:int=-1, preserve_raw:bool=False, json_path:str=None, pkl_path:str=None, print_settings=False) :: OpenHSI

Core functionality for Ximea cameras
{% endraw %} {% raw %}
{% endraw %} {% raw %}
def run_ximea():
    with XimeaCamera(n_lines=128, processing_lvl = -1, pkl_path="",json_path='../assets/cam_settings_ximea.json') as cam:
        cam.start_cam()
        for i in tqdm(range(cam.n_lines)):
            cam.put(cam.get_img())
        cam.stop_cam()

%prun run_ximea()

#fig = cam.show(robust=True)    
#fig
{% endraw %}

Lucid Vision Labs Camera

Used for the OpenHSI Camera from Sydney Photonics.

Make sure you install the Lucid Vision Labs Arena SDK and Python API beforehand. These can be found at https://thinklucid.com/downloads-hub/

Any keyword-value pair arguments must match those available in the settings file. LucidCamera expects the ones listed below:

  • binxy: number of pixels to bin in the (x,y) direction
  • win_resolution: size of the detector area to read out (width, height)
  • win_offset: offsets (x,y) from the edge of the detector for a selective readout window
  • exposure_ms: camera exposure time in milliseconds
  • pixel_format: pixel format read out from the sensor, e.g. Mono8, Mono10, Mono10Packed, Mono12, Mono12Packed, Mono16
  • mac_addr: MAC address of the GigE sensor as a string, e.g. "1c:0f:af:01:7b:a0"
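For reference, a settings file carrying these keys might look like the following sketch (the values are illustrative placeholders, not a known-good configuration for any particular unit):

```python
import json

# Illustrative LucidCamera settings; keys match the list above,
# values must be tuned for your detector.
lucid_settings = {
    "binxy": [1, 1],
    "win_resolution": [1936, 1216],
    "win_offset": [0, 0],
    "exposure_ms": 10.0,
    "pixel_format": "Mono12",
    "mac_addr": "1c:0f:af:01:7b:a0",
}

print(json.dumps(lucid_settings, indent=2))
```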
{% raw %}

class LucidCamera[source]

LucidCamera(n_lines:int=16, processing_lvl:int=-1, preserve_raw:bool=False, json_path:str=None, pkl_path:str=None, print_settings=False) :: OpenHSI

Core functionality for Lucid Vision Lab cameras

Any keyword-value pair arguments must match those available in the settings file. LucidCamera expects the ones listed below:

- `binxy`: number of pixels to bin in the (x,y) direction
- `win_resolution`: size of the detector area to read out (width, height)
- `win_offset`: offsets (x,y) from the edge of the detector for a selective readout window
- `exposure_ms`: camera exposure time in milliseconds
- `pixel_format`: pixel format read out from the sensor, e.g. Mono8, Mono10, Mono10p, Mono10Packed, Mono12, Mono12p, Mono12Packed, Mono16
- `mac_addr`: MAC address of the GigE sensor as a string, e.g. `"1c:0f:af:01:7b:a0"`
{% endraw %} {% raw %}
{% endraw %} {% raw %}
json_path='cals/OpenHSI-07/OpenHSI-07_settings_Mono8_bin1.json'
pkl_path='cals/OpenHSI-07/OpenHSI-07_calibration_Mono8_bin1.pkl'

with LucidCamera(n_lines=1000, 
                 processing_lvl = 2, 
                 pkl_path=pkl_path,json_path=json_path,
                 exposure_ms=10
                ) as cam:
    cam.collect()
    # cam.start_cam()
    # img = cam.get_img()
    # cam.stop_cam()
    cam.show(hist_eq=True)
     
# hv.Image(img, bounds=(0,0,*img.shape)).opts(xlabel="wavelength index",ylabel="cross-track",cmap="gray",title="test frame",width=400,height=400)
cam.show(hist_eq=True)
100%|██████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:11<00:00, 88.62it/s]
{% endraw %}

FLIR

Follow the install instructions for https://pypi.org/project/simple-pyspin/. This includes the Spinnaker SDK and the Python PySpin .whl file from https://flir.app.boxcn.net/v/SpinnakerSDK. {% include note.html content='PySpin only supports Python 2.7/3.6-3.8' %} There are some additional settings:

  • win_resolution: size of the detector area to read out (width, height)
  • win_offset: offsets (x,y) from the edge of the detector for a selective readout window
  • exposure_us: camera exposure time in microseconds
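`win_offset` and `win_resolution` together define the readout region; for instance, to centre a window on the sensor you can compute the offsets from the full sensor size. A small helper to illustrate the arithmetic (hypothetical, not part of openhsi):

```python
def centred_window(sensor_wh, window_wh):
    """Return (x, y) offsets that centre a readout window on the sensor.

    sensor_wh, window_wh: (width, height) tuples in pixels.
    """
    sw, sh = sensor_wh
    ww, wh = window_wh
    if ww > sw or wh > sh:
        raise ValueError("window larger than sensor")
    return ((sw - ww) // 2, (sh - wh) // 2)

# e.g. an 1936x1216 sensor with an 896x528 readout window
print(centred_window((1936, 1216), (896, 528)))  # (520, 344)
```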
{% raw %}

class FlirCamera[source]

FlirCamera(n_lines:int=16, processing_lvl:int=-1, preserve_raw:bool=False, json_path:str=None, pkl_path:str=None, print_settings=False) :: OpenHSI

Interface for FLIR camera
{% endraw %} {% raw %}
{% endraw %} {% raw %}
json_path='assets/cam_settings_flir.json'
pkl_path='assets/cam_calibration_flir.pkl'

with FlirCamera(n_lines=1000, 
                 processing_lvl = 2, 
                 pkl_path=pkl_path,json_path=json_path,
                ) as cam:
    cam.collect()
    fig = cam.show(hist_eq=True)
    
fig
100%|██████████| 1000/1000 [00:12<00:00, 78.71it/s]
{% endraw %}